Greg Detre
Monday, February 24, 2003
The physical stance is that of the scientist: we try to predict the behaviour of objects around us according to hard, low-level scientific laws. It is reliable, but requires an enormous amount of knowledge about the world, and over time or on a large scale it quickly becomes infeasible.
The design stance makes the assumption that the object has been designed to match some specification or expectation, allowing us to ignore most of its physical implementation details and simply assume it will work as it appears to be designed to. Dennett's paradigmatic example is the alarm clock: we don't care what it looks like, as long as we can recognise the means of telling the time, the means of setting the alarm, and the alarm itself. In unexceptional circumstances, the design stance allows us a reliable predictive shortcut that is considerably less expensive than the physical stance, and therefore usable on larger scales.
Dennett's central thesis is that to be a believer is to be a system well described by the intentional stance. He describes the intentional stance as allowing us to predict the behaviour of even more complicated systems, as long as they behave rationally. Rationality here is 'practical rationality', i.e. doing that which you believe will satisfy your desires. In order to employ the intentional stance/strategy, we first assume the rationality of an intentional system, guess at its beliefs based on what we know about its situation, purpose and awareness of the world, and then guess at its desires based on similar considerations. There is room within this definition for a restricted rationality, which takes into account only a limited subset of beliefs (e.g. relevant or false ones), or does not fully explore their implications and potential contradictions. This is analogous to having flawed systems, or limited knowledge, at the physical or design stances: our predictive power is weakened without necessarily collapsing, as long as the false, incomplete or inconsistent beliefs are in the minority.
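The three-step procedure above can be caricatured in a few lines of code. This is purely my own toy rendering, not Dennett's formalism; the representation of beliefs and desires as strings, and the `intentional_strategy` function itself, are illustrative inventions.

```python
# A toy rendering of the intentional strategy as a prediction procedure
# (my illustration, not Dennett's): attribute beliefs from the situation,
# attribute desires from the purpose, then apply the rationality assumption.

def intentional_strategy(situation, purpose):
    # Attribute the beliefs the system ought to have, given its situation.
    beliefs = {thing: f"there is {thing} nearby" for thing in situation}
    # Attribute the desires it ought to have, given its purpose.
    desires = [purpose]
    # Rationality assumption: predict that it acts on whichever
    # attributed belief would satisfy an attributed desire.
    for desire in desires:
        if desire in beliefs:
            return f"act on the belief that {beliefs[desire]}"
    return "no prediction"

print(intentional_strategy({"food", "water"}, "food"))
# → act on the belief that there is food nearby
```

The restricted rationality mentioned above would correspond to this procedure surveying only some of the attributed beliefs, or failing to check them against one another for contradictions.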
Importantly, he goes on to consider whether this definition of an intentional system as something which acts as it ought to actually excludes anything at all. This is easy, though: we can exclude the lectern simply because we gain no new predictive power from using the intentional strategy upon it, whereas in the case of people and animals, using the intentional strategy is our only practical option. This brings up a problem, though. Dennett wants to argue that if the intentional strategy works on a system, that system is a believer. If, however, our use of the intentional strategy for some complicated system is only necessary because we are not clever enough to internalise its complexity and use the physical stance, then it appears as though whether something is a believer or not depends upon the epistemological power of the observer.
This can be seen as an 'interpretationist' stance with regard to beliefs: whether or not something should be considered a belief is in some way dependent upon the context, the observer, the state of the world, your preferences, or some other relativisation. In contrast, realism about beliefs holds that whether or not something is a belief, or a system is a believer, is an objective matter whose truth is independent of the intelligence of the observer or what the reader had for breakfast. However, Dennett's position is subtle enough to evade these binary extremes: 'the decision to adopt the intentional stance is free, but the facts about the success or failure of the stance, were one to adopt it, are perfectly objective' (p. 67). In other words, super-intelligent autistic nerd-Martians may scorn the intentional stance as the predictive shortcut of sub-intelligent humans, but they could use it if they chose, and if they did, their ascription of systems as believers would then be the same as ours. In fact, more or less any system that we would want to consider intelligent must necessarily at least consider itself from the intentional stance: 'if they observe, theorize, predict, communicate, they view themselves as intentional systems'.
As far as I can tell, Dennett's mild realist stance says that in degenerate cases (internal inconsistency of beliefs etc.), different cultures/people may disagree on which beliefs and desires a system has, but they will always agree on whether or not a system is degenerate. I find it difficult to follow his discussion of possible alternative intentional strategies for predicting the behaviour of a person, since I can't imagine an alternative to practical rationality.
However, I
think he does an excellent job of showing why, for complex and interesting
intentional systems, we can be fairly safe in trusting to our current
intentional strategy as a source of consensus.
To illustrate this, he starts by describing how a simple thermostat is loosely coupled to its environment. If we reattach its temperature sensor input to a water level sensor, we radically alter the semantics of its internal states. However, once we give it multiple inputs which consistently measure the same quantity in different modalities, and perhaps different ways of affecting the world to produce broadly the same result, we enrich and embed the semantics of its internal states. In other words, the 'class of indistinguishably satisfactory models of the formal system embodied in its internal states gets smaller and smaller as we add such complexities', and so we can reliably and consensually ascribe it certain beliefs using the intentional stance. This tight relationship between the organisation of the system and the environment, resulting in systematic responses to that environment, is one way to see what we mean by a 'representation' of that environment.
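The narrowing of the 'class of indistinguishably satisfactory models' can be sketched very crudely. The sensors and candidate quantities below are my own illustrative inventions, not Dennett's example: each sensor is described only by the set of quantities it could plausibly be measuring, and an interpretation of the internal state survives only if it is consistent with every attached input.

```python
# Toy sketch (not from Dennett): each sensor is represented by the set
# of environmental quantities it could plausibly be tracking.
bimetallic_strip = {"temperature"}
infrared_reading = {"temperature", "light level"}
float_switch = {"water level"}

def satisfactory_models(*sensors):
    # The interpretations of the internal state consistent with every
    # input: the intersection of what each sensor could be measuring.
    return set.intersection(*sensors)

# One ambiguous input leaves several satisfactory models...
print(satisfactory_models(infrared_reading))
# ...but a second modality measuring the same quantity narrows the
# class to a single interpretation: the state now tracks temperature.
print(satisfactory_models(infrared_reading, bimetallic_strip))
```

Adding outputs that act on the world in convergent ways would filter the candidate models in just the same fashion.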
This discussion also speaks to one of the great strengths and weaknesses of Dennett's definition of belief in terms of performance: he appears to be saying that some enormous lookup table, designed to act in one of a large number of prescribed ways, would be a believer according to the intentional stance (cf. Block's 'Aunt Bertha' Turing test machine). This feels wildly counter-intuitive at first, but such a machine would be considerably less flexibly and richly embedded in its environment, and should therefore be ascribed correspondingly less rich and firm belief states.
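The contrast can be made concrete with a deliberately trivial sketch. This is my own illustration, not Block's actual machine: the point is only that a table agent is exhausted by its prescribed cases, while even a crude generative agent remains coupled to novel input.

```python
# Toy contrast (my illustration, not Block's 'Aunt Bertha' machine)
# between a lookup-table agent and a more flexibly embedded one.

# The table agent can only respond to inputs its designers foresaw.
lookup_agent = {"greeting": "hello", "farewell": "goodbye"}

def flexible_agent(stimulus):
    # Generalises by computing a response from the stimulus itself.
    return f"acknowledge {stimulus}"

# Both succeed on a prescribed input...
print(lookup_agent.get("greeting"))   # hello
print(flexible_agent("greeting"))     # acknowledge greeting
# ...but only the flexible agent copes with a novel one.
print(lookup_agent.get("insult"))     # None
print(flexible_agent("insult"))       # acknowledge insult
```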
Bloom ends by pointing out that 'learning a word, after all, is a social act - an arbitrary convention shared by a community of speakers, something that is true only because of the contents of other people's minds. Even if we knew nothing else, this simple fact should make us sympathetic to the idea that theory of mind might be intimately related to the process of word learning'. He develops this thesis carefully, ending with the most speculative conclusions.
That children are able to learn words for objects successfully under conditions of minimal exposure ('fast-mapping') from as young as one or two years of age is evidence of some pretty powerful learning abilities. The conventional view is that this is done by associating the object with its name over repeated instances, eventually narrowing down to just the particular object that had been present each time. Bloom argues persuasively that young children may actually be relying upon inferences about the referential intentions of other people. This would be a far more powerful method, because co-occurrence requires many exposures, and can often lead to false associations being made.
Bloom cites a number of experiments that seem to strongly support the idea that children are using higher-level, more intentional cues than simple association. For instance, Baldwin (1991) showed that children associate the name with the object that the adult was attending to, rather than the object the child was looking at. Furthermore, they do not associate a name uttered by a disembodied voice with a lone, novel object. Finally, Bloom argues that the abilities of autistic children provide further evidence that children employ theory of mind when learning the names of objects. Given that a prominent explanation of autism is that it is the product of a delayed, impaired or non-existent theory of mind (Baron-Cohen, Leslie and Frith, 1985), it seems appropriate that autistic children often have no language at all, use entire phrases in a parroted way (e.g. 'Do you want a biscuit?' instead of 'I want a biscuit'), and make unusual errors in a 'simple associative way' (e.g. one boy calling a truck 'a sausage', apparently because his mother had said 'Tommy, come and eat your sausage' as he was looking at his truck). This was confirmed by Baron-Cohen et al.'s (1997) discrepant looking experiment, in which the autistic children mapped the word onto the object they themselves were attending to, rather than the one the speaker was attending to.
I feel that Bloom's case against pure associationist learning is pretty strong, but he might have mentioned other learning strategies that children appear to take advantage of that don't involve notions as expensive as theory of mind. The best example I know of is from Naigles (1990), which suggested that children can learn some of a verb's meaning from the syntax of the sentence in which it's used. In short, the experiment involved showing 24-month-olds a video of a rabbit feeding a duck, in conjunction with the sentences:
1. 'the rabbit is zorking the duck'
2. 'the duck is zorking'
They appeared correspondingly to interpret 'zorking' to mean either 'feeding' or 'eating'. This is admittedly a different problem from that of naming objects, and may require the child to have already learned the names of various objects, but I feel that it shows how children could be doing a sort of constraint satisfaction between high-level structural knowledge of the environment and the known syntactic categories of the sentence. In conjunction with prosodic cues, and perhaps some genetic or learned non-linguistic expectation of subject/object agents, actions, number etc., we can see that associationist learning is not the only learning strategy available to children.
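The frame-based inference that Naigles probed can be caricatured in a few lines. This is my own toy sketch, not Naigles's analysis: it reduces the cue to whether anything follows the novel verb, standing in for the transitive/intransitive distinction.

```python
# Toy sketch of syntactic bootstrapping (my illustration, not Naigles's
# analysis): a transitive frame suggests a causative reading of the
# novel verb, an intransitive frame a non-causative one.

def interpret_novel_verb(sentence, verb="zorking"):
    words = sentence.split()
    # If material follows the verb, treat the frame as transitive.
    has_object = words.index(verb) < len(words) - 1
    return "causative, e.g. feeding" if has_object else "non-causative, e.g. eating"

print(interpret_novel_verb("the rabbit is zorking the duck"))  # causative, e.g. feeding
print(interpret_novel_verb("the duck is zorking"))             # non-causative, e.g. eating
```

A real learner would of course combine this syntactic cue with the scene itself and with the known nouns, as the constraint-satisfaction picture above suggests.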
Of course, this is not to say that these theories could explain away Bloom's strong evidence supporting children's use of referential intention and knowledge of others' knowledge of the world. My main concern here is that while Bloom argues for a general, unitary theory of mind module, the staggered progress children make in theory of mind tasks as they age would seem to suggest that our theory of mind starts as a more fragmented, domain-specific set of hacks.
I won't discuss the second half of his paper in as much detail, but Bloom goes further, arguing that the bias against lexical overlap might not be a specifically linguistic rule, but rather a general theory of mind expectation that other people will generally try to be informative. Similarly, children's assumption that new words refer to entire objects could be a result of their perceptual system naturally dividing up the world that way (the 'conceptual bias' proposal), or because they assume that other people, when they use words, typically intend to refer to objects (the 'intentional bias' proposal), or some mix or inter-dependency between the two.